
    Some problems with a behavioristic account of early group pretense

    In normal child development, both individual and group pretense first emerge at approximately two years of age. The metarepresentational account of pretense holds that children already have the concept PRETEND when they first engage in early group pretense. A behavioristic account suggests that early group pretense is analogous to early beliefs or desires and thus requires no mental state concepts. I argue that a behavioral account does not explain the actual behavior observed in children and cannot explain how children come to understand that a specific action is one of pretense rather than one of belief. I conclude that a mentalistic explanation of pretense best explains the behavior under consideration.

    Cross-level Validation of Topological Quantum Circuits

    Quantum computing promises a new approach to solving difficult computational problems, and the quest to build a quantum computer has begun. While the first construction attempts were successful, scalability has never been achieved, due to the inherently fragile nature of quantum bits (qubits). Of the many approaches to achieving scalability, topological quantum computing (TQC) is the most promising, being based on a flexible approach to error correction and making use of the straightforward measurement-based computing technique. TQC circuits are defined within a large, uniform, 3-dimensional lattice of physical qubits produced by the hardware, and the physical volume of this lattice directly relates to the resources required for computation. Circuit optimization may result in non-intuitive mismatches between circuit specification and implementation. In this paper we introduce the first method for cross-level validation of TQC circuits. The specification of the circuit is expressed in the stabilizer formalism, and the stabilizer table is checked by mapping the topology onto the physical qubit level, followed by quantum circuit simulation. Simulation results show that cross-level validation of error-corrected circuits is feasible. Comment: 12 pages, 5 figures. Comments welcome. RC2014, Springer Lecture Notes in Computer Science (LNCS) 8507, pp. 189-200. Springer International Publishing, Switzerland (2014), Y. Shigeru and M. Shin-ichi (Eds.)
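    The core idea of checking a stabilizer specification by circuit simulation can be illustrated with a toy example (not the paper's actual tool, and at the logical rather than the topological level): simulate a small circuit as a state vector, then verify that each stabilizer generator in the specification leaves the resulting state fixed. The Bell-state circuit and its stabilizers XX and ZZ below are standard textbook material, used here purely as a stand-in specification.

    ```python
    import numpy as np

    # Single-qubit operators.
    I = np.eye(2)
    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

    def kron(*ops):
        """Tensor product of single-qubit operators, qubit 0 leftmost."""
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    # CNOT with qubit 0 as control, qubit 1 as target.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=float)

    # Simulate the circuit: H on qubit 0, then CNOT(0 -> 1), starting from |00>.
    psi = np.zeros(4)
    psi[0] = 1.0
    psi = CNOT @ kron(H, I) @ psi          # Bell state (|00> + |11>)/sqrt(2)

    def stabilizes(op, state, tol=1e-9):
        """True if `state` is a +1 eigenvector of `op`, i.e. op|psi> = |psi>."""
        return np.allclose(op @ state, state, atol=tol)

    # Specification (stabilizer table): the Bell state is stabilized by XX and ZZ.
    assert stabilizes(kron(X, X), psi)
    assert stabilizes(kron(Z, Z), psi)
    assert not stabilizes(kron(Z, I), psi)  # ZI is not in the stabilizer group
    ```

    A real TQC validation flow would apply the same pass/fail criterion after mapping the circuit down to the encoded physical-qubit lattice, where a direct state-vector simulation is only feasible for small instances.
    
    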

    Theories of Reference: What Was the Question?

    The new theory of reference has won popularity. However, a number of noted philosophers have also attempted to reply to the critical arguments of Kripke and others, and aimed to vindicate the description theory of reference. Such responses are often based on ingenious novel kinds of descriptions, such as rigidified descriptions, causal descriptions, and metalinguistic descriptions. This prolonged debate raises doubts about whether the different parties really share any understanding of what the central question of the philosophical theory of reference is: what is the main question to which descriptivism and the causal-historical theory have presented competing answers? One aim of the paper is to clarify this issue. The most influential objections to the new theory of reference are critically reviewed. Special attention is also paid to certain important later advances in the new theory of reference, due to Devitt and others.

    Bowen ratio estimates of evapotranspiration for stands on the Virgin River in Southern Nevada

    A Bowen ratio energy balance was conducted over a Tamarix ramosissima (saltcedar) stand growing in a riparian corridor along the Virgin River in southern Nevada. Measurements in two separate years were compared and contrasted on the basis of changes in growing conditions. In 1994, a drought year, record high temperatures, dry winds, and a falling water table caused partial wilt of outer smaller twigs in the canopy of many trees in the stand around the Bowen tower. Subsequently, evapotranspiration (ET) estimates declined dramatically over a 60‐day period (11 mm d−1 to … mm d−1). In 1995, the Virgin River at the Bowen tower area changed its course, hydrologically isolating the Tamarix stand in the vicinity of the tower. In 1996, a 25% canopy loss was visually estimated for the Tamarix growing in the area of the tower. Higher soil temperatures relative to air temperatures were recorded in 1996 in response to this loss in canopy. With a more open canopy, thermally induced turbulence was observed in 1996. On day 160 of 1996, a 28°C rise over a 9‐hour period was correlated with increased wind speeds of greater than 4 m s−1. Subsequently, higher ET estimates were made in 1996 than in 1994 (145 cm versus 75 cm). However, the energy balance was dominated by advection in 1996, with latent energy flux exceeding net radiation on 65% of the measurement days, compared to only 11% in 1994. We believe this advection was on the scale of the floodplain (hundreds of meters) rather than regional, since the majority of the wind (90%) was in a N–S direction along the course of the river, and that the more open canopy allowed the horizontal transfer of energy into the Tamarix stand at the Bowen tower. Our results suggest that Tamarix has the potential to be both a low water user and a high water user, depending on moisture availability, canopy development, and atmospheric demand, and that advection can dominate energy balances and ET in aridland riparian zones such as the Virgin River.
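    The Bowen ratio method referred to here partitions available energy between sensible and latent heat flux using vertical gradients of temperature and vapor pressure. A minimal sketch of the standard formulation follows (this is the textbook method, not the study's processing code; the gradient values and the psychrometric constant of 0.066 kPa K−1 are illustrative assumptions):

    ```python
    # Psychrometric constant (kPa K^-1), typical near-sea-level value (assumption).
    GAMMA = 0.066

    def bowen_ratio(dT, de, gamma=GAMMA):
        """Bowen ratio beta = H/LE, estimated from the gradients of air
        temperature dT (K) and vapor pressure de (kPa) between two heights."""
        return gamma * dT / de

    def latent_heat_flux(Rn, G, beta):
        """Partition available energy (net radiation Rn minus soil heat flux G,
        both W m^-2) into latent heat flux: LE = (Rn - G) / (1 + beta).
        LE exceeding Rn implies beta < 0, i.e. sensible heat advected in."""
        return (Rn - G) / (1.0 + beta)

    def le_to_et(LE, lam=2.45e6):
        """Convert a latent heat flux (W m^-2), held for one day, to an ET
        depth (mm d^-1), with latent heat of vaporization lam (J kg^-1)."""
        return LE * 86400.0 / lam

    # Hypothetical midday gradients over a well-watered riparian canopy.
    beta = bowen_ratio(dT=1.0, de=0.8)               # small beta: ET-dominated
    LE = latent_heat_flux(Rn=500.0, G=50.0, beta=beta)
    print(f"beta = {beta:.3f}, LE = {LE:.0f} W/m2, ET = {le_to_et(LE):.1f} mm/d")
    ```

    The advective conditions described in the abstract correspond to a negative Bowen ratio: sensible heat flows toward the cooler, transpiring canopy, so LE can exceed Rn.
    
    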

    AI Ethics Needs Good Data

    In this chapter we argue that discourses on AI must transcend the language of 'ethics' and engage with power and political economy in order to constitute 'Good Data'. In particular, we must move beyond the depoliticised language of 'ethics' currently deployed (Wagner 2018) in determining whether AI is 'good', given the limitations of ethics as a frame through which AI issues can be viewed. In order to circumvent these limits, we use instead the language and conceptualisation of 'Good Data', as a more expansive term to elucidate the values, rights and interests at stake when it comes to AI's development and deployment, as well as that of other digital technologies. Good Data considerations move beyond recurring themes of data protection/privacy and the FAT (fairness, transparency and accountability) movement to include explicit political economy critiques of power. Instead of yet more ethics principles (that tend to say the same or similar things anyway), we offer four 'pillars' on which Good Data AI can be built: community, rights, usability and politics. Overall we view AI's 'goodness' as an explicitly political (economy) question of power, one always related to the degree to which AI is created and used to increase the wellbeing of society, and especially to increase the power of the most marginalized and disenfranchised. We offer recommendations and remedies towards implementing 'better' approaches to AI. Our strategies enable a different (but complementary) kind of evaluation of AI as part of the broader socio-technical systems in which AI is built and deployed. Comment: 20 pages, under peer review in Pieter Verdegem (ed), AI for Everyone? Critical Perspectives. University of Westminster Press.